Problem

I am trying to do some text mining on Russian text using the tm package and have run into some issues.

Preprocessing speed depends heavily on the encoding.

library(tm)
rus_txt<-paste(readLines('http://lib.ru/LITRA/PUSHKIN/dubrowskij.txt',encoding='cp1251'), collapse=' ')
object.size(rus_txt)
eng_txt<-paste(readLines('http://www.gutenberg.org/cache/epub/1112/pg1112.txt',encoding='UTF-8'), collapse=' ')
object.size(eng_txt)
# text sizes nearly identical
rus_txt_utf8<-iconv(rus_txt, to='UTF-8')
system.time(rus_txt_lower<-tolower(rus_txt_utf8))
#3.17         0.00         3.19 
system.time(eng_txt_lower<-tolower(eng_txt))
#0.03         0.00         0.03
system.time(rus_txt_lower<-tolower(rus_txt))
#0.07         0.00         0.08

Forty times faster! And on large corpora the difference was up to 500 times!

Let's try to tokenize some text (this function is used inside TermDocumentMatrix):

some_text<-"Несколько  лет  тому  назад  в  одном  из своих  поместий жил старинный
русской барин, Кирила Петрович Троекуров. Его богатство, знатный род и связи
давали ему большой вес в губерниях, где  находилось его имение.  Соседи рады
были угождать малейшим его прихотям; губернские чиновники трепетали  при его
имени;  Кирила  Петрович принимал знаки  подобострастия как надлежащую дань;
дом его  всегда был полон  гостями, готовыми тешить  его барскую праздность,
разделяя  шумные,  а  иногда  и  буйные  его  увеселения.  Никто  не  дерзал
отказываться от его приглашения, или в известные  дни не являться  с должным
почтением в село  Покровское."
scan_tokenizer(some_text)
#[1] "Несколько"  "лет"        "тому"       "назад"      "в"          "одном"      "из"         "своих"     
# [9] "поместий"   "жил"        "старинный"  "русской"    "барин,"     "Кирила"     "Петрович"   "Троекуров."
#[17] "Его"        "богатство," "знатный"    "род"        "и"          "св" 

Oops... It seems the R core function scan() treats the Russian lowercase letter 'я' as EOF. I tried different encodings, but I haven't found out how to fix this.
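
A likely explanation (my assumption, not something I have verified in the R sources): in CP1251 the letter 'я' is encoded as the single byte 0xFF, the same value as EOF in C, so a byte-oriented reader can mistake it for the end of input:

charToRaw(iconv('я', to='CP1251'))
# [1] ff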

OK, let's try to remove punctuation:

removePunctuation("жил старинный русской барин, Кирила Петрович Троекуров")
#"жил старинный русской барин Кирила Петрови Троекуров"

Hmm... where is the letter 'ч'? With UTF-8 encoding this works fine (see the check after my locale info below), but it took some time to find that out. I also had a performance issue with the removeWords() function, but I can't reproduce it. The main question is: how do I read and tokenize texts containing the letter 'я'? My locale:

Sys.getlocale()
#[1] "LC_COLLATE=Russian_Russia.1251;LC_CTYPE=Russian_Russia.1251;LC_MONETARY=Russian_Russia.1251;LC_NUMERIC=C;LC_TIME=Russian_Russia.1251"

Solution

1) Question: how to read and tokenize texts with the letter 'я'? Answer: write your own tokenizer and use it. For example:

my_tokenizer <- function(x)
{
  # convert to UTF-8, then split on runs of whitespace or punctuation
  strsplit(iconv(x, to='UTF-8'), split='([[:space:]]|[[:punct:]])+', perl=FALSE)[[1]]
}
TDM <- TermDocumentMatrix(corpus, control=list(tokenize=my_tokenizer, weighting=weightTf, wordLengths=c(3,10)))
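
A quick check on a phrase from the question (the output is what I would expect from strsplit with this pattern):

my_tokenizer("жил старинный русской барин, Кирила Петрович Троекуров")
#[1] "жил"       "старинный" "русской"   "барин"     "Кирила"    "Петрович"  "Троекуров"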

2) Performance depends heavily on... the performance of the tolower function. Maybe this is a bug, I don't know, but every time you call it you have to convert your text to the native encoding using enc2native (assuming, of course, that your text is not in English).

doc.corpus <- Corpus(VectorSource(enc2native(textVector)))

Moreover, after all the text preprocessing on your corpus you have to convert it again (this is because TermDocumentMatrix and many other functions in the tm package use tolower internally):

tm_map(doc.corpus, enc2native)

So your full flow will look something like this:

createCorp <- function(textVector)
{
  # build the corpus from text in the native encoding, so tolower stays fast
  doc.corpus <- Corpus(VectorSource(enc2native(textVector)))
  doc.corpus <- tm_map(doc.corpus, tolower)
  doc.corpus <- tm_map(doc.corpus, removePunctuation)
  doc.corpus <- tm_map(doc.corpus, removeWords, stopwords("russian"))
  doc.corpus <- tm_map(doc.corpus, stemDocument, "russian")
  doc.corpus <- tm_map(doc.corpus, stripWhitespace)
  # convert back to the native encoding for functions that call tolower internally
  return(tm_map(doc.corpus, enc2native))
}
my_tokenizer <- function(x)
{
  # convert to UTF-8, then split on runs of whitespace or punctuation
  strsplit(iconv(x, to='UTF-8'), split='([[:space:]]|[[:punct:]])+', perl=FALSE)[[1]]
}
TDM <- TermDocumentMatrix(corpus, control=list(tokenize=my_tokenizer, weighting=weightTf, wordLengths=c(3,10)))
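
A hypothetical end-to-end usage sketch (the names textVector and corpus are assumptions carried over from above; rus_txt is the Dubrovsky text read in the question):

library(tm)
textVector <- rus_txt                    # or any character vector of documents
corpus <- createCorp(textVector)
TDM <- TermDocumentMatrix(corpus, control=list(tokenize=my_tokenizer, weighting=weightTf, wordLengths=c(3,10)))
findFreqTerms(TDM, lowfreq=5)            # terms occurring at least 5 times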