Problem

I've created a custom trigram analyzer for fuzzy matching in my project (NGramTokenizer(Version.LUCENE_44, reader, 3, 3)), specifying a minimum and maximum token size of 3.

At index time I get the expected trigram tokens, but when I use the same analyzer at query time (via QueryParser), it skips tokens that are shorter than 3 characters.

Example

Indexed Document - Hi Rushik

Indexed Tri-grams - hi_, i_r, rus, ush, shi, hik (checked it using Luke index reader)

Query - Hi Rushik AB XYZ.

Parsed Query (QueryParser result) - (name_data:rus name_data:ush name_data:shi name_data:hik) name_data:xyz

As you can see, the query parser removed tokens shorter than 3 characters. I understand I specified (3, 3) when tokenizing, but in that case shouldn't indexing also have skipped terms shorter than 3 characters?

I think I am missing something here; any help?


Solution

Got the answer.

Lucene's QueryParser first tokenizes the query text on whitespace and only then runs the analyzer on each individual term. Since my analyzer is NGram(3, 3), it cannot generate any token from a 2-character term. At index time, by contrast, the whole field value goes through the tokenizer as a single stream, so trigrams can span word boundaries (which is why hi_ and i_r appear in the index).
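To illustrate the mechanism, here is a small standalone sketch (plain Java, not the Lucene API; the class and method names are invented for this example). It mimics what QueryParser plus an NGram(3, 3) analyzer do to the query "Hi Rushik AB XYZ", and contrasts that with index-time analysis of the whole field value:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not Lucene itself: shows why whitespace
// pre-tokenization makes short query words disappear.
public class TrigramQuerySketch {

    // Character trigrams of one string, like NGramTokenizer(3, 3).
    static List<String> trigrams(String text) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i + 3 <= text.length(); i++) {
            out.add(text.substring(i, i + 3));
        }
        return out; // empty when text is shorter than 3 chars
    }

    public static void main(String[] args) {
        // QueryParser splits on whitespace BEFORE the analyzer runs,
        // so each word is analyzed in isolation.
        String query = "Hi Rushik AB XYZ";
        List<String> queryTerms = new ArrayList<>();
        for (String word : query.toLowerCase().split("\\s+")) {
            queryTerms.addAll(trigrams(word)); // "hi" and "ab" yield nothing
        }
        System.out.println(queryTerms); // [rus, ush, shi, hik, xyz]

        // At index time the whole field value is one stream, so
        // trigrams can span the space between words.
        System.out.println(trigrams("hi rushik")); // includes "hi " and "i r"
    }
}
```

The first output matches the parsed query shown above: only "rushik" and "xyz" survive, because per-word analysis of a 2-character word produces no trigrams.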

License: CC-BY-SA with attribution
Not affiliated with StackOverflow