There may be a tokenization bug here, but it is hard to tell from this alone.
What you (should) have here is a phrase query for "\", (wildcard-word), "\", "漢", "\", (wildcard-word), in that order. The query is punctuation-sensitive. Do you have an example of some content you think should match?
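To make that token sequence concrete, here is a minimal sketch of punctuation-sensitive tokenization. This is only a rough illustration of the idea, not the engine's actual analyzer: it emits runs of word characters (including CJK) as word tokens and keeps each "\" as its own token.

```python
import re

# Illustrative sketch only -- a guess at punctuation-sensitive behavior,
# NOT the engine's real tokenizer. Word runs (including CJK characters)
# become word tokens; each backslash survives as a separate token.
def tokenize(text: str):
    return re.findall(r"\w+|\\", text)

# Hypothetical content resembling the query in question:
print(tokenize(r"\ab\漢\cd"))  # -> ['\\', 'ab', '\\', '漢', '\\', 'cd']
```

Under this reading, a phrase match would require the backslashes to appear in the indexed content at exactly those positions, which is why an example of content you expect to match would help.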
What does the query plan show you? What are your index settings?