It looks like you haven't defined a mapping at all, which means Elasticsearch will infer your datatypes and use the default mappings.
For the macaddr field, this means it is recognised as a string and the standard analyzer is used. This analyzer breaks the string up on whitespace and punctuation, leaving you with tokens consisting of pairs of digits: "00:19:92:00:71:80" gets tokenized to 00, 19, 92, 00, 71, 80. The same tokenization happens when you search.
What you want is to define an analyzer which turns "00:19:92:00:71:80" into the tokens 00, 00:, 00:1, 00:19, and so on.
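To get a feel for what the edge n-gram tokenizer will emit, here is a small shell sketch, not Elasticsearch itself, just a simulation of the min_gram=2, max_gram=17 prefix expansion configured below:

```shell
# Simulate an edge n-gram tokenizer on a MAC address: emit every
# prefix of the input from length 2 (min_gram) to length 17 (max_gram).
s="00:19:92:00:71:80"
n=2
while [ "$n" -le 17 ] && [ "$n" -le "${#s}" ]; do
    tok=$(printf '%s' "$s" | cut -c1-"$n")
    printf '%s\n' "$tok"
    n=$((n + 1))
done
```

This prints 00, 00:, 00:1, ... up to the full 17-character address, which is why max_gram is set to 17 below: it is exactly the length of a colon-separated MAC address.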
Try this:
curl -XPUT http://localhost:9200/ap-test -d '
{
    "settings" : {
        "analysis" : {
            "analyzer" : {
                "my_edge_ngram_analyzer" : {
                    "tokenizer" : "my_edge_ngram_tokenizer"
                }
            },
            "tokenizer" : {
                "my_edge_ngram_tokenizer" : {
                    "type" : "edgeNGram",
                    "min_gram" : "2",
                    "max_gram" : "17"
                }
            }
        }
    }
}'
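If you want to check what the analyzer actually produces before indexing anything, the _analyze API will show you the tokens (a sketch, assuming the index was created as above and Elasticsearch is running on localhost:9200):

```shell
# Ask Elasticsearch to run the custom analyzer over a sample MAC address
# and return the resulting tokens.
curl -XGET 'http://localhost:9200/ap-test/_analyze?analyzer=my_edge_ngram_analyzer' -d '00:19:92:00:71:80'
```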
curl -XPUT http://localhost:9200/ap-test/devices/_mapping -d '
{
    "devices": {
        "properties": {
            "user": {
                "type": "string"
            },
            "macaddr": {
                "type": "string",
                "index_analyzer" : "my_edge_ngram_analyzer",
                "search_analyzer": "keyword"
            }
        }
    }
}'
Put the documents as before, then search with the query specifically aimed at the field:
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
    "query" : {
        "query_string" : {
            "query" : "\"00\\:19\\:92\\:00\\:71\\:80\"",
            "fields" : ["macaddr", "user"]
        }
    }
}'
As for your last question: the text query is deprecated.
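It was replaced by the match query, so a search against the same field would look like this (a sketch, using the index and field names from above):

```shell
# Equivalent search using the match query, which replaced the
# deprecated text query.
curl -XPOST http://localhost:9200/ap-test/devices/_search -d '
{
    "query" : {
        "match" : {
            "macaddr" : "00:19:92:00:71:80"
        }
    }
}'
```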
Good luck!