The problem you are facing is often called measuring the "consensus" among classifiers. Since multilabel MaxEnt can be seen as N independent classifiers, you can think of it as a group of models "voting" for different classes.
There are many ways of measuring such "consensus", including:
- "naive" margin calculation: the difference between the probability of the "winning" class and that of the runner-up; the bigger the margin, the more confident the classification
- entropy: the smaller the entropy of the resulting probability distribution, the more confident the decision
- further methods involving KL divergence, etc.
In general you should think about methods of detecting "uniformity" of the resulting distribution (implying a less confident decision) or "spikiness" (indicating a more confident classification).
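As a minimal sketch of these three measures, assuming `probs` is the vector of per-class probabilities your model outputs (the array values below are made up for illustration):

```python
import numpy as np

def margin(probs):
    """Difference between the top-1 and top-2 class probabilities.
    Bigger margin -> more confident classification."""
    top2 = np.sort(probs)[-2:]
    return top2[1] - top2[0]

def shannon_entropy(probs):
    """Entropy of the distribution.
    Lower entropy -> spikier, more confident prediction."""
    p = probs[probs > 0]  # avoid log(0)
    return -np.sum(p * np.log(p))

def kl_from_uniform(probs):
    """KL divergence D(p || uniform) = log(n) - H(p).
    Higher value -> further from uniform -> more confident."""
    return np.log(len(probs)) - shannon_entropy(probs)

# hypothetical outputs: one "spiky" (confident), one near-uniform (unsure)
confident = np.array([0.85, 0.10, 0.05])
uncertain = np.array([0.40, 0.35, 0.25])
```

Here, a confident prediction yields a large margin, low entropy, and large KL divergence from uniform; an uncertain one gives the opposite on all three, so any of them can serve as your confidence score.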