Question

I have the following code which tries to find the standard deviation of the grey channel of a picture:

require 'rmagick'

images = Magick::ImageList.new('mypic.jpg')

images.each do |image|
  grey = image.quantize(256, Magick::GRAYColorspace)
  p grey.inspect # -> mypic.jpg JPEG 69x120 69x120+0+0 DirectClass 8-bit
  p "GREY CHANNEL DEPTH"
  p grey.channel_depth(Magick::GrayChannel) # -> 16
  p grey.channel_mean(Magick::GrayChannel) # -> [26929.525603864735, 17142.094885263676]
end

My question is: Why are these values so big? [26929.525603864735, 17142.094885263676]

I mean, those are the mean and standard deviation, but how would I convert them into the range 0-1? Should I divide by 2^16 or 2^8? It's confusing because even though the picture seems to be 8-bit, the channel depth is 16.

I have also noticed that for another picture the channel depth is 8, but the values returned by channel_mean are [35394.21133333333, 30093.66624626083]


Solution

The image may be quantized down to eight bits, but you're still working in ImageMagick's internal format. The depth of the internal format is given by Magick::QuantumDepth:

QuantumDepth
The number of bits in a quantum.

and AFAIK that's a compile-time constant for the underlying ImageMagick libraries. A QuantumDepth of 16 corresponds to a Magick::QuantumRange of 65535 (or 0 ... 2**16 or 0 .. (2**16 - 1) if you prefer). A quick look at Magick::Pixel might be a good starting point for tracing all this stuff out.
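
As a quick sanity check, you can print the constants at runtime. A minimal sketch, assuming a Q16 build of ImageMagick (note that recent RMagick versions spell the depth constant Magick::MAGICKCORE_QUANTUM_DEPTH rather than Magick::QuantumDepth):

require 'rmagick'

# Both values are fixed when the ImageMagick libraries are compiled
puts Magick::QuantumDepth # -> 16 on a Q16 build
puts Magick::QuantumRange # -> 65535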

If you want your values to be inside the [0, 1] interval then scale them down using Magick::QuantumRange:

grey.channel_mean(Magick::GrayChannel).map { |x| x / Magick::QuantumRange }
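
Putting it together with the code from the question, a minimal sketch (mypic.jpg and the approximate outputs shown are just the values from the question, assuming a Q16 build so QuantumRange is 65535):

require 'rmagick'

image = Magick::ImageList.new('mypic.jpg').first
grey  = image.quantize(256, Magick::GRAYColorspace)

mean, stddev = grey.channel_mean(Magick::GrayChannel)

# Scale from quantum units (0..QuantumRange) down to the [0, 1] interval
puts mean   / Magick::QuantumRange # -> ~0.41 for the values in the question
puts stddev / Magick::QuantumRange # -> ~0.26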