Question

I am learning latent semantic analysis (LSA). I can construct a term-document matrix and compute its SVD decomposition. How can I get the topics from that decomposition?

For example, in gensim:

topic #0(332.762): 0.425*"utc" + 0.299*"talk" + 0.293*"page" + 0.226*"article" + 0.224*"delete" + 0.216*"discussion" + 0.205*"deletion" + 0.198*"should" + 0.146*"debate" + 0.132*"be"
topic #1(201.852): 0.282*"link" + 0.209*"he" + 0.145*"com" + 0.139*"his" + -0.137*"page" + -0.118*"delete" + 0.114*"blacklist" + -0.108*"deletion" + -0.105*"discussion" + 0.100*"diff"
topic #2(191.991): -0.565*"link" + -0.241*"com" + -0.238*"blacklist" + -0.202*"diff" + -0.193*"additions" + -0.182*"users" + -0.158*"coibot" + -0.136*"user" + 0.133*"he" + -0.130*"resolves"

Solution

You can get the U, S, and V matrices of your SVD decomposition; see the gensim FAQ: https://github.com/piskvorky/gensim/wiki/Recipes-&-FAQ#wiki-q4-how-do-you-output-the-u-s-vt-matrices-of-lsi

EDIT: Answering the question from the comment:

The printed topics are simply vectors from the matrix U (=the left singular vectors), normalized to unit length.

The tutorial at http://radimrehurek.com/gensim/tut2.html#transforming-vectors may also help.

What is actually printed are the top-N words that contribute the most to that particular topic (by default, the top 10 words).
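The two points above can be sketched without gensim at all: take the SVD of a term-document matrix, treat each column of U as a topic (the columns are already unit length, since U is orthonormal), and print the top-N terms by absolute weight. The vocabulary and counts below are made up for illustration, and the output format merely imitates gensim's.

```python
import numpy as np

terms = ["cat", "dog", "bird", "fish", "tree"]
# Term-document matrix: rows = terms, columns = documents (made-up counts)
A = np.array([
    [2, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [0, 0, 0],
], dtype=float)

# Thin SVD: A = U @ diag(S) @ Vt
U, S, Vt = np.linalg.svd(A, full_matrices=False)

def print_topic(k, topn=3):
    """Print the top-n terms of topic k, in a gensim-like format."""
    vec = U[:, k]                          # topic k = k-th left singular vector
    order = np.argsort(-np.abs(vec))[:topn]  # terms with largest |weight| first
    parts = " + ".join(f'{vec[i]:.3f}*"{terms[i]}"' for i in order)
    print(f"topic #{k}({S[k]:.3f}): {parts}")

for k in range(2):
    print_topic(k)
```

Note that, as in the gensim output above, individual weights may be negative; the sign pattern within a topic is what matters, since each singular vector is only defined up to an overall sign flip.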

You can see exactly how these topics are computed here; it's rather straightforward: https://github.com/piskvorky/gensim/blob/0.8.9/gensim/models/lsimodel.py#L447

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow