ISO 10646 and Unicode define only big-endian and little-endian UCS-4/UTF-32, not middle-endian variants. To my knowledge, no software in existence actually uses those byte orders; they are practically irrelevant. Why, then, does the XML standard mention them? I don't know, but I suspect it was a desire for theoretical completeness rather than any practical need; the same likely applies to the character detection/conversion software that supports them.
Historically, some systems did use a middle-endian byte order: the PDP-11 stored 32-bit values as two little-endian 16-bit words with the most significant word first, which corresponds to the 2143 byte order in the XML spec's numbering. So if you were to try to process UCS-4/UTF-32 on a PDP-11, the UCS-4-2143 format might be useful. But in practice no one does, since the PDP-11 was past its heyday by the time Unicode arrived; and since the PDP-11 is only a 16-bit machine, UCS-4 would not be the most sensible Unicode encoding for it anyway.
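To make the byte shuffling concrete, here is a small sketch (the function names are my own invention) of converting between the PDP-11 middle-endian layout, little-endian 16-bit words with the most significant word first, and ordinary big-endian UTF-32. Swapping each adjacent pair of bytes is enough to turn one into the other:

```python
def ucs4_pdp11_encode(text: str) -> bytes:
    """Encode text in the PDP-11 middle-endian UCS-4 layout.

    Each 32-bit code point becomes two little-endian 16-bit words,
    most significant word first. Relative to big-endian UTF-32,
    that is simply every adjacent byte pair swapped.
    """
    be = text.encode("utf-32-be")
    out = bytearray(len(be))
    out[0::2] = be[1::2]  # odd-position bytes move to even positions
    out[1::2] = be[0::2]  # and vice versa
    return bytes(out)

def ucs4_pdp11_decode(data: bytes) -> str:
    """Decode the PDP-11 middle-endian layout back to a string."""
    swapped = bytearray(len(data))
    swapped[0::2] = data[1::2]
    swapped[1::2] = data[0::2]
    return bytes(swapped).decode("utf-32-be")
```

For example, U+0041 ("A") is 00 00 00 41 in big-endian UTF-32, but 00 00 41 00 in this middle-endian layout: the high word 0000 comes first, and each word has its bytes swapped.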