If I were doing it, instead of making each pixel fall into one of several buckets, I'd see how far each hue was from each color. For instance, you could take the difference from each "target hue", and sum up all the differences. Whichever has the lowest total after looping through all the pixels should be the "most used". Of course it's not perfect, but naming colors is not a trivial task for a computer.
For example, to get the total for "green" (hue 120 in my arbitrary world):
float runningTotalForGreen = 0;
for(Float3 hsv: hsvs)
{
    float diff = Math.abs(hsv.x - 120); // distance from green's hue
    if (diff > 180) diff = 360 - diff;  // wrap around the wheel, so cw/ccw both count
    runningTotalForGreen += diff;       // lower total = closer to green
}
You'll probably want to store your target colors and totals in arrays for easier looping, but that's the general idea.
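Something like this, maybe (a sketch, not tested against a real image pipeline — `Float3`, the target names, and the hue values are all my own placeholders, assuming hue runs 0–360):

```java
public class HueScorer {
    // Placeholder HSV triple: x = hue (0-360), y = saturation, z = value.
    record Float3(float x, float y, float z) {}

    static String closestColor(Float3[] hsvs) {
        // Parallel arrays of target names and hues, so one loop covers them all.
        String[] names = { "red", "green", "blue" };
        float[] targetHues = { 0, 120, 240 };
        float[] totals = new float[targetHues.length];

        for (Float3 hsv : hsvs) {
            for (int i = 0; i < targetHues.length; i++) {
                float diff = Math.abs(hsv.x - targetHues[i]);
                if (diff > 180) diff = 360 - diff; // wrap around the wheel
                totals[i] += diff;
            }
        }

        // The target with the lowest running total is the "most used" color.
        int best = 0;
        for (int i = 1; i < totals.length; i++)
            if (totals[i] < totals[best]) best = i;
        return names[best];
    }

    public static void main(String[] args) {
        Float3[] pixels = { new Float3(110, 1, 1), new Float3(130, 1, 1) };
        System.out.println(closestColor(pixels)); // prints "green"
    }
}
```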
Edit:
The reason I think this will work better than "buckets" is this: Consider a picture, where about 35% of the picture is red. The other 65% is somewhere on the border of light/dark blue. So, say 34% falls into the light blue bucket, 31% falls into dark blue. Your method says it's red, since 35% is greater than both. Using the hue difference will most likely return one of the blues.
Of course, any method will fail for some images. The key is finding the one that errors least, and that depends a lot on what type of image it is. I do agree you'll need special handling for certain colors (black/white/brown/etc).
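For black/white/gray in particular, hue is meaningless when saturation or value is low, so one option is to check those channels before doing any hue comparison. A rough sketch (the thresholds here are arbitrary guesses, assuming saturation and value in 0–1):

```java
public class AchromaticCheck {
    // Returns a special-case name for near-achromatic pixels, or
    // "chromatic" to signal that the hue-distance comparison should run.
    static String classifyPixel(float hue, float sat, float val) {
        if (val < 0.15f) return "black";                          // too dark for hue to matter
        if (sat < 0.2f)  return (val > 0.85f) ? "white" : "gray"; // washed out
        return "chromatic";
    }

    public static void main(String[] args) {
        System.out.println(classifyPixel(0f, 0f, 0.05f));    // prints "black"
        System.out.println(classifyPixel(0f, 0.1f, 0.9f));   // prints "white"
        System.out.println(classifyPixel(120f, 0.8f, 0.5f)); // prints "chromatic"
    }
}
```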
The main issue here is that it's a human problem. For my example image above, some people would say light blue. Some would say dark blue. Some might even say red, depending on how vivid/contrasting it is, especially if the blue was the background with red in the foreground. Saying "this picture is x color" isn't ever consistent unless the picture is basically monochrome.