I finally realized that tokenization/segmentation is not included in this POS tagger. It appears that words must be space-delimited before being fed to the tagger. For those interested in maximum entropy word segmentation of Chinese, there is a separate package available here:
http://nlp.stanford.edu/software/segmenter.shtml
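For anyone else who hits this, here is a minimal sketch of feeding pre-segmented text to the tagger through its Java API. It assumes the MaxentTagger class from the distribution; the model filename below is just a placeholder, so substitute whichever Chinese model ships with your download.

import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class TagPreSegmented {
    public static void main(String[] args) throws Exception {
        // Load a trained tagger model. The path here is an assumption;
        // use the Chinese model file included in your tagger download.
        MaxentTagger tagger = new MaxentTagger("models/chinese-distsim.tagger");

        // The tagger does no segmentation of its own, so the tokens
        // must already be separated by spaces.
        String preSegmented = "我 喜欢 自然 语言 处理";

        // tagString returns the input with a POS tag attached to each token.
        System.out.println(tagger.tagString(preSegmented));
    }
}

In other words, you would run the segmenter from the link above first, then hand its space-delimited output to the tagger.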
Thanks everyone.