To simplify, let's assume you're testing on a balanced set: half of the test data is positive and half is negative, so chance accuracy is 50%.
My guess is that something strange is happening that is flipping the sign of your decisions. A classifier whose accuracy is far below chance is actually very informative: on a binary task, inverting its predictions turns accuracy p into 1 − p, so you would just need to flip the decision it makes. Check your code to make sure you're not flipping the class labels of the training data. Some libraries behave in surprising ways here; LIBSVM, for example, treats the label of the first training instance as the positive class, so the sign of its decision values depends on the order of the training data.
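Here is a minimal sketch (assuming scikit-learn for the metric; the label arrays are hypothetical, just to illustrate the effect) of how to confirm this diagnosis: invert the predictions and see whether accuracy jumps from p to 1 − p.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Hypothetical balanced binary test set with labels in {0, 1}.
y_true = np.array([0, 0, 0, 1, 1, 1])   # 3 negatives, 3 positives
y_pred = np.array([1, 1, 0, 0, 0, 0])   # predictions from the suspect classifier

acc = accuracy_score(y_true, y_pred)
print(f"reported accuracy: {acc:.2f}")  # 1/6 ≈ 0.17, far below chance (0.50)

# Inverting every prediction turns accuracy p into 1 - p on a binary task.
y_flipped = 1 - y_pred
print(f"flipped accuracy:  {accuracy_score(y_true, y_flipped):.2f}")  # 5/6 ≈ 0.83
```

If the flipped accuracy is high, the learned decision boundary is fine and the bug is purely a sign/label inversion somewhere in your pipeline.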
To summarize: the features you're selecting appear to be useful, but you likely have a bug that is flipping the classes.