In general, the p-value indicates how probable a given outcome, or a more extreme one, is under the null hypothesis. In your case of feature selection, the null hypothesis is something like "this feature contains no information about the prediction target", where "no information" is to be interpreted in the sense of the scoring method: if your scoring method tests, e.g., a univariate linear interaction (`f_classif` and `f_regression` in `sklearn.feature_selection` are options for your scoring function), then the null hypothesis says that this linear interaction is not present.
TL;DR: The p-value of a feature selection score indicates the probability that this score or a higher one would be obtained if the variable had no interaction with the target.
Another general statement: scores are better when greater, p-values are better when smaller (and losses are better when smaller).