Question

I have a list of p-values and I would like to calculate the adjusted p-values for multiple comparisons using the FDR. In R, I can use:

pval <- read.csv("my_file.txt",header=F,sep="\t")
pval <- pval[,1]
FDR <- p.adjust(pval, method= "BH")
print(length(pval[FDR<0.1]))
write.table(cbind(pval, FDR),"pval_FDR.txt",row.names=F,sep="\t",quote=F )

How can I implement this code in Python? Here was my feeble attempt, put together with the help of Google:

pvalue_list = [2.26717873145e-10, 1.36209234286e-11, 0.684342083821...]  # my p-values
pvalue_lst = [v.r['p.value'] for v in pvalue_list]
p_adjust = R.r['p.adjust'](R.FloatVector(pvalue_lst),method='BH')
for v in p_adjust:
    print v

The above code throws AttributeError: 'float' object has no attribute 'r'. Can anyone point out my problem? Thanks in advance for the help!


Solution

If you want to be sure of what you are getting from R, you can also indicate explicitly that you want the function from the R package 'stats':

from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import FloatVector

stats = importr('stats')

p_adjust = stats.p_adjust(FloatVector(pvalue_list), method = 'BH')
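
If you also want the rest of the original R script (reading the p-values from a tab-separated file with no header row, counting how many pass FDR < 0.1, and writing both columns back out), a minimal sketch along the same lines might look like this, reusing the file names from the question:

from rpy2.robjects.packages import importr
from rpy2.robjects.vectors import FloatVector

stats = importr('stats')

# read the first column of the tab-separated file as floats (no header, as in the R snippet)
with open('my_file.txt') as fh:
    pvalue_list = [float(line.split('\t')[0]) for line in fh if line.strip()]

fdr = list(stats.p_adjust(FloatVector(pvalue_list), method='BH'))

# equivalent of print(length(pval[FDR < 0.1]))
print(sum(1 for q in fdr if q < 0.1))

# equivalent of write.table(cbind(pval, FDR), "pval_FDR.txt", ...)
with open('pval_FDR.txt', 'w') as out:
    out.write('pval\tFDR\n')
    for p, q in zip(pvalue_list, fdr):
        out.write('%g\t%g\n' % (p, q))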

OTHER TIPS

This question is a bit old, but multiple-comparison corrections are available in statsmodels for Python; see multipletests:

http://statsmodels.sourceforge.net/devel/generated/statsmodels.sandbox.stats.multicomp.multipletests.html#statsmodels.sandbox.stats.multicomp.multipletests
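
For example (a minimal sketch; in current statsmodels releases the function is exposed as statsmodels.stats.multitest.multipletests, and the p-values below are just the ones from the question):

import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([2.26717873145e-10, 1.36209234286e-11, 0.684342083821])

# 'fdr_bh' is the Benjamini-Hochberg procedure, i.e. R's p.adjust(..., method="BH")
reject, pvals_corrected, _, _ = multipletests(pvals, alpha=0.1, method='fdr_bh')

print(pvals_corrected)  # BH-adjusted p-values
print(reject.sum())     # how many tests are significant at FDR < 0.1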

Here is an in-house function I use:

def correct_pvalues_for_multiple_testing(pvalues, correction_type="Benjamini-Hochberg"):
    """
    Consistent with R:
    print(correct_pvalues_for_multiple_testing([0.0, 0.01, 0.029, 0.03, 0.031, 0.05, 0.069, 0.07, 0.071, 0.09, 0.1]))
    """
    from numpy import array, empty, minimum
    pvalues = array(pvalues)
    n = pvalues.shape[0]
    new_pvalues = empty(n)
    if correction_type == "Bonferroni":
        # multiply by the number of tests, capping at 1
        new_pvalues = minimum(1, n * pvalues)
    elif correction_type == "Bonferroni-Holm":
        # step-down: sort ascending, scale by (n - rank), enforce monotonicity
        values = [(pvalue, i) for i, pvalue in enumerate(pvalues)]
        values.sort()
        running_max = 0
        for rank, (pvalue, i) in enumerate(values):
            running_max = max(running_max, (n - rank) * pvalue)
            new_pvalues[i] = min(1, running_max)
    elif correction_type == "Benjamini-Hochberg":
        # step-up: sort descending, scale by n/rank, take the running minimum
        values = [(pvalue, i) for i, pvalue in enumerate(pvalues)]
        values.sort()
        values.reverse()
        new_values = []
        for i, (pvalue, index) in enumerate(values):
            rank = n - i
            new_values.append((float(n) / rank) * pvalue)
        for i in range(0, n - 1):
            if new_values[i] < new_values[i + 1]:
                new_values[i + 1] = new_values[i]
        for i, (pvalue, index) in enumerate(values):
            new_pvalues[index] = min(1, new_values[i])
    return new_pvalues
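
For example, calling it on the test vector from the docstring:

pvals = [0.0, 0.01, 0.029, 0.03, 0.031, 0.05, 0.069, 0.07, 0.071, 0.09, 0.1]
print(correct_pvalues_for_multiple_testing(pvals, correction_type="Benjamini-Hochberg"))
print(correct_pvalues_for_multiple_testing(pvals, correction_type="Bonferroni"))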

Using Python's numpy library, without calling out to R at all, here's a reasonably efficient implementation of the BH method:

import numpy as np

def p_adjust_bh(p):
    """Benjamini-Hochberg p-value correction for multiple hypothesis testing."""
    p = np.asarray(p, dtype=float)
    by_descend = p.argsort()[::-1]   # indices that sort p in descending order
    by_orig = by_descend.argsort()   # mapping back to the original order
    steps = float(len(p)) / np.arange(len(p), 0, -1)   # n/rank for each descending position
    q = np.minimum(1, np.minimum.accumulate(steps * p[by_descend]))  # running min, capped at 1
    return q[by_orig]

(Based on the R code BondedDust posted)
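
A quick sanity check on a toy vector (values chosen here just for illustration):

print(p_adjust_bh([0.005, 0.011, 0.02, 0.04]))
# -> approximately [0.02, 0.022, 0.02666667, 0.04], which should match
#    p.adjust(c(0.005, 0.011, 0.02, 0.04), method="BH") in R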

(I know this is not the answer... just trying to be helpful.) The BH code in R's p.adjust is just:

BH = {
        i <- lp:1L   # lp is the number of p-values
        o <- order(p, decreasing = TRUE) # "o" will reverse sort the p-values
        ro <- order(o)
        pmin(1, cummin(n/i * p[o]))[ro]  # n is also the number of p-values
      }

Old question, but here's a translation of the R FDR code into Python (which is probably fairly inefficient):

def FDR(x):
    """
    Assumes a list or numpy array x which contains p-values for multiple tests.
    Copied from the p.adjust function in R; note that the harmonic-series factor
    q means this follows the "BY" (Benjamini-Yekutieli) branch of p.adjust.
    """
    o = [i[0] for i in sorted(enumerate(x), key=lambda v: v[1], reverse=True)]
    ro = [i[0] for i in sorted(enumerate(o), key=lambda v: v[1])]
    q = sum(1.0 / i for i in range(1, len(x) + 1))
    l = [q * len(x) / i * x[j] for i, j in zip(reversed(range(1, len(x) + 1)), o)]
    # enforce monotonicity over the descending-order values (R's cummin)
    for k in range(1, len(l)):
        l[k] = min(l[k], l[k - 1])
    # map back to the original order, capping at 1
    return [min(l[k], 1.0) for k in ro]
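
On the same toy vector as above, this should agree with R's p.adjust with method="BY" (the variant that includes the harmonic-series factor):

print(FDR([0.005, 0.011, 0.02, 0.04]))
# -> approximately [0.0417, 0.0458, 0.0556, 0.0833], i.e.
#    p.adjust(c(0.005, 0.011, 0.02, 0.04), method="BY") in R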

Well, to get your code working, I would guess something like this would work:

import rpy2.robjects as R

pvalue_list = [2.26717873145e-10, 1.36209234286e-11, 0.684342083821...]  # my p-values
p_adjust = R.r['p.adjust'](R.FloatVector(pvalue_list), method='BH')
for v in p_adjust:
    print(v)

If p.adjust is simple enough, you could write it in Python so you avoid the need to call into R. And if you want to use it a lot, you can make a simple Python wrapper:

def adjust_pvalues(pvalues, method='BH'):
    return R.r['p.adjust'](R.FloatVector(pvalues), method=method)
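
which could then be used like this (reusing pvalue_list and the rpy2 import R from above; 'bonferroni' is one of the other methods p.adjust accepts):

adjusted = adjust_pvalues(pvalue_list)                         # BH by default
bonferroni = adjust_pvalues(pvalue_list, method='bonferroni')
print(list(adjusted))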

Licensed under: CC-BY-SA with attribution