Generally, calculating PMI is tricky since the formula changes depending on the size of the ngram you want to take into consideration.

Mathematically, for bigrams, you can simply consider:

    PMI(a, b) = log2( p(a,b) / (p(a) * p(b)) )

Programmatically, assuming you have already counted all the unigram and bigram frequencies in your corpus, you can do this:
    import math

    def pmi(word1, word2, unigram_freq, bigram_freq):
        # Maximum-likelihood estimates of the unigram and bigram probabilities.
        prob_word1 = unigram_freq[word1] / float(sum(unigram_freq.values()))
        prob_word2 = unigram_freq[word2] / float(sum(unigram_freq.values()))
        prob_word1_word2 = bigram_freq[" ".join([word1, word2])] / float(sum(bigram_freq.values()))
        # Base-2 log, so the score is in bits.
        return math.log(prob_word1_word2 / (prob_word1 * prob_word2), 2)
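For example, here is how you might build the frequency counts with `collections.Counter` on a toy corpus and compute the PMI inline (the toy sentence is just an illustration, and results on such a tiny corpus are not meaningful beyond showing the mechanics):

```python
import math
from collections import Counter

corpus = "this is a foo bar sentence and another foo bar sentence".split()

# Unigram counts, and bigram counts from adjacent word pairs.
unigram_freq = Counter(corpus)
bigram_freq = Counter(" ".join(pair) for pair in zip(corpus, corpus[1:]))

total_unigrams = float(sum(unigram_freq.values()))
total_bigrams = float(sum(bigram_freq.values()))

p_foo = unigram_freq["foo"] / total_unigrams
p_bar = unigram_freq["bar"] / total_unigrams
p_foo_bar = bigram_freq["foo bar"] / total_bigrams

# PMI("foo", "bar") is positive, i.e. "foo bar" co-occurs more often than chance.
score = math.log(p_foo_bar / (p_foo * p_bar), 2)
print(score)  # ≈ 2.60
```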
This is a code snippet from an MWE library that is still in pre-development (https://github.com/alvations/Terminator/blob/master/mwe.py). Do note that it is built for parallel MWE extraction, so here's how you can "hack" it to extract monolingual MWEs:
$ wget https://dl.dropboxusercontent.com/u/45771499/mwe.py
$ printf "This is a foo bar sentence .\nI need multi-word expression from this text file.\nThe text file is messed up , I know you foo bar multi-word expression thingy .\n More foo bar is needed , so that the text file is populated with some sort of foo bar bigrams to extract the multi-word expression ." > src.txt
$ printf "" > trg.txt
$ python
>>> import codecs
>>> from mwe import load_ngramfreq, extract_mwe
>>> # Calculates the unigrams and bigrams counts.
>>> # More superfluously, "Training a bigram 'language model'."
>>> unigram, bigram, _ , _ = load_ngramfreq('src.txt','trg.txt')
>>> sent = "This is another foo bar sentence not in the training corpus ."
>>> for threshold in range(-2, 5):
... print threshold, [mwe for mwe in extract_mwe(sent.strip().lower(), unigram, bigram, threshold)]
[out]:
-2 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
-1 ['this is', 'is another', 'another foo', 'foo bar', 'bar sentence', 'sentence not', 'not in', 'in the', 'the training', 'training corpus', 'corpus .']
0 ['this is', 'foo bar', 'bar sentence']
1 ['this is', 'foo bar', 'bar sentence']
2 ['this is', 'foo bar', 'bar sentence']
3 ['foo bar', 'bar sentence']
4 []
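If you would rather not depend on the library, the thresholding loop above can be approximated in a few lines of plain Python using the PMI formula from earlier. Note that `extract_candidates` is a hypothetical stand-in, not the library's `extract_mwe`: here unseen words and bigrams are simply skipped rather than smoothed, which is one reason its output may differ from the library's.

```python
import math
from collections import Counter

def extract_candidates(sentence, unigram_freq, bigram_freq, threshold):
    """Yield adjacent word pairs whose PMI exceeds the threshold.

    Unseen words/bigrams are skipped rather than smoothed."""
    words = sentence.split()
    n_uni = float(sum(unigram_freq.values()))
    n_bi = float(sum(bigram_freq.values()))
    for w1, w2 in zip(words, words[1:]):
        bigram = " ".join([w1, w2])
        if w1 not in unigram_freq or w2 not in unigram_freq or bigram not in bigram_freq:
            continue
        pmi = math.log((bigram_freq[bigram] / n_bi) /
                       ((unigram_freq[w1] / n_uni) * (unigram_freq[w2] / n_uni)), 2)
        if pmi > threshold:
            yield bigram

# "Train" on a toy corpus, then extract at threshold 0.
train = "this is a foo bar sentence and another foo bar sentence".split()
uni = Counter(train)
bi = Counter(" ".join(p) for p in zip(train, train[1:]))
print(list(extract_candidates("yet another foo bar sentence", uni, bi, 0)))
# ['another foo', 'foo bar', 'bar sentence']
```

On a corpus this small nearly every seen bigram scores above chance, which is why raising the threshold (as in the session above) is what actually prunes the candidate list.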
For further details, I find this thesis a quick and easy introduction to MWE extraction: "Extending the Log Likelihood Measure to Improve Collocation Identification", see http://goo.gl/5ebTJJ